Humans must have override power over military AI

#artificialintelligence

For years, U.S. defense officials and Washington think-tankers alike have debated whether the future of our military could -- or should -- look a little less human. Already, the U.S. military has started to rely on technology that employs machine learning, artificial intelligence (AI), and big data -- raising ethical questions along the way. While these technologies have countless beneficial applications, ranging from threat assessment to preparing troops for battle, they rightfully evoke concerns about a future in which Terminator-like machines take over.


What Do NLP Researchers Believe? Results of the NLP Community Metasurvey

Michael, Julian, Holtzman, Ari, Parrish, Alicia, Mueller, Aaron, Wang, Alex, Chen, Angelica, Madaan, Divyam, Nangia, Nikita, Pang, Richard Yuanzhe, Phang, Jason, Bowman, Samuel R.

arXiv.org Artificial Intelligence

We present the results of the NLP Community Metasurvey. Run from May to June 2022, the survey elicited opinions on controversial issues, including industry influence in the field, concerns about AGI, and ethics. Our results put concrete numbers to several controversies: For example, respondents are split almost exactly in half on questions about the importance of artificial general intelligence, whether language models understand language, and the necessity of linguistic structure and inductive bias for solving NLP problems. In addition, the survey posed meta-questions, asking respondents to predict the distribution of survey responses. This allows us not only to gain insight on the spectrum of beliefs held by NLP researchers, but also to uncover false sociological beliefs where the community's predictions don't match reality. We find such mismatches on a wide range of issues. Among other results, the community greatly overestimates its own belief in the usefulness of benchmarks and the potential for scaling to solve real-world problems, while underestimating its own belief in the importance of linguistic structure, inductive bias, and interdisciplinary science.



Seoul shares face biometrics of 170M travelers with private firms

#artificialintelligence

The South Korean government shared roughly 170 million face images of citizens and resident foreign nationals with the private sector without their consent to be used in training and testing biometric algorithms, according to a recent Ministry of Justice document. The move is part of an "AI identification and tracking system development project" based on a memorandum of understanding between the Korean Ministry of Justice (MOJ) and the Ministry of Science and ICT (MSIT). Scheduled for completion in 2022, the project has seen the MOJ transferring information obtained during the immigration screening process to the MSIT, including face biometrics, nationality, gender, and age. The MSIT subsequently transferred that information to private businesses for the purpose of artificial intelligence technology research, according to the allegations. The South Korean government mentioned the creation of the project in a press release when it first launched in 2019 but did not disclose information about its structure, scope, or data collection methods.


The Path to Fairer AI Starts With Audits, Standards

#artificialintelligence

Ethical principles aren't enough to defend against the worst potential impacts of artificial intelligence systems, and the time has come for the U.S. to establish official legal policies for this emerging technology, said policy and technology experts during a recent report launch event from New America's Open Technology Institute. That work requires clearly defining terms and enforcement measures, and speakers proposed mechanisms that can help government promote fairness, accountability and transparency (FAT) in algorithmic systems, as well as outlining the challenges that lie ahead. They called for the federal government to regulate how private firms like online content platforms develop and leverage AI, and to establish formal policies for overseeing and vetting the algorithmic systems public agencies adopt and purchase. Such AI audits are currently voluntary, said Spandana Singh, policy analyst at the Open Technology Institute and co-author of the report. AI can deliver newfound efficiencies, extract meaning from troves of data and deliver a variety of other benefits, but the complexity, opacity and lack of foresight in some of these systems means they can be designed, implemented or evolve in ways that produce biased and discriminatory effects.


Indian Government in the Field of AI and Analytics

#artificialintelligence

Over the past two years, we have seen a steady increase in AI adoption in India. Given the Indian government's ongoing focus on building a plan for artificial intelligence, it is recommended to apply deep analysis of AI applications and implications to determine (a) the state of AI innovation in India, and (b) strategic insights to help India survive and thrive in a global market with the help of AI initiatives. Advances in artificial intelligence and data analytics are driving development in many parts of the world. China, for instance, has committed $150 billion toward its goal of becoming a world leader in AI by 2030. And while the United States government is investing just $1.1 billion in non-classified AI research, its private sector is spending billions in fields from finance and healthcare to retail and defense.


Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity - Future of Life Institute

#artificialintelligence

Finance, education, medicine, programming, the arts -- artificial intelligence is set to disrupt nearly every sector of our society. Governments and policy experts have started to realize that, in order to prepare for this future, in order to minimize the risks and ensure that AI benefits humanity, we need to start planning for the arrival of advanced AI systems today. Although we are still in the early moments of this movement, the landscape looks promising. Several nations and independent firms have already started to strategize and develop policies for the governance of AI. Last year, the UAE appointed the world's first Minister of Artificial Intelligence, and Germany took smaller, but similar, steps in 2017, when the Ethics Commission at the German Ministry of Transport and Digital Infrastructure developed the world's first set of regulatory guidelines for automated and connected driving.


Securing the internet of things means using markets, not mandates

@machinelearnbot

A surprising number of everyday devices are now connected to the internet. And it's not just your Amazon Echo or Google Home; it may be your thermostat, your car, or even your toaster. These devices, and many more like them, make up the "internet of things" (IoT). Though new, these devices are proving quite useful to businesses and consumers. However, the proliferation of billions of new connected devices also presents novel security threats that demand serious attention.


DeepMind-Royal Free deal is "cautionary tale" for healthcare in the algorithmic age

#artificialintelligence

Researchers studying a deal in which Google's artificial intelligence subsidiary, DeepMind, acquired access to millions of sensitive NHS patient records have warned that more must be done to regulate data transfers from public bodies to private firms. The academic study says that "inexcusable" mistakes were made when, in 2015, the Royal Free NHS Foundation Trust in London signed an agreement with Google DeepMind. This allowed the British AI firm to analyse sensitive information about 1.6 million patients who use the Trust's hospitals each year. The access was used to create monitoring software for mobile devices, called Streams, which promises to improve clinicians' ability to support patients with Acute Kidney Injury (AKI). But according to the study's authors, the purposes stated in the agreement were far less specific, and made more open-ended references to using data to improve services.
